
In this study, the authors investigate the power of quantum neural networks and compare them to classical neural networks. They focus on two main aspects: capacity and trainability. The capacity of a model refers to its ability to express a wide range of relationships between input and output variables, while trainability refers to how easily its parameters can be optimized during training.

The authors start by discussing how the power of a model is commonly quantified by its complexity. They introduce the Vapnik-Chervonenkis dimension, a complexity measure widely used in statistical learning theory. However, they note that the Vapnik-Chervonenkis dimension is difficult to compute in practice and that bounds built on it rest on unrealistic assumptions.

To overcome these limitations, the authors propose using the effective dimension as a capacity measure. The effective dimension takes into account the Fisher information, which describes the geometry of a model’s parameter space. They argue that the effective dimension is a robust capacity measure and use it to study the power of both classical and quantum neural networks.
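As a rough illustration of how such a measure can be evaluated, the sketch below estimates an effective dimension by Monte Carlo: sample Fisher information matrices at random parameter points, normalize them, and average a log-determinant term. This is a minimal numpy sketch assuming the Fisher matrices are already computed; the normalization constant `gamma` and the data-sample count `n` follow the general form of such a measure and are assumptions here, not values quoted from the paper.

```python
import numpy as np

def effective_dimension(fishers, n, gamma=1.0):
    """Monte Carlo estimate of an effective dimension.

    fishers : (k, d, d) array of Fisher information matrices sampled
              at k random parameter points of a d-parameter model.
    n       : number of data samples the measure is evaluated at
              (assumed large enough that kappa below exceeds 1).
    gamma   : constant in (0, 1]; part of the assumed normalization.
    """
    k, d, _ = fishers.shape
    # Normalize so that the average trace of the matrices equals d.
    fishers = fishers * (d * k / np.trace(fishers, axis1=1, axis2=2).sum())
    kappa = gamma * n / (2 * np.pi * np.log(n))
    # log sqrt(det(I + kappa * F_hat)) per sample; slogdet for stability.
    half_logdets = np.array(
        [np.linalg.slogdet(np.eye(d) + kappa * f)[1] for f in fishers]
    ) / 2.0
    # Average over parameter space in the log domain (log-mean-exp).
    m = half_logdets.max()
    log_integral = m + np.log(np.mean(np.exp(half_logdets - m)))
    return 2.0 * log_integral / np.log(kappa)
```

Intuitively, a larger value on the same data budget means the model’s parameter directions carry more usable information, i.e., a richer set of expressible functions.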

The authors then turn to the trainability of quantum neural networks. They discuss the “barren plateau” phenomenon, in which the loss landscape of a quantum model becomes extremely flat and gradient-based optimization stalls. This flatness is captured by the curvature of the loss function, which can be quantified by the Hessian or by the closely related Fisher information matrix, and the authors propose using the Fisher information matrix to analyze the trainability of quantum neural networks.
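In practice, the Fisher information matrix is often estimated empirically from per-sample gradients of the model’s log-likelihood. The helper below is a minimal sketch under that assumption; the name `empirical_fisher` and the gradient array it takes are hypothetical placeholders, not identifiers from the paper.

```python
import numpy as np

def empirical_fisher(grad_log_p):
    """Empirical Fisher information matrix.

    grad_log_p : (k, d) array holding the gradient of the model's
                 log-likelihood with respect to its d parameters,
                 one row per sampled data point.
    """
    k = grad_log_p.shape[0]
    # Average of the outer products of the per-sample score vectors.
    return grad_log_p.T @ grad_log_p / k
```

Eigenvalues of this matrix that cluster near zero correspond to flat directions of the landscape; a barren plateau shows up as a spectrum in which nearly all eigenvalues vanish.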

To analyze the power and trainability of quantum neural networks, the authors conduct experiments with different models and architectures. They compare the Fisher information spectra of a classical neural network, an “easy” quantum model, and a quantum neural network. They find that the quantum neural network has a more uniform distribution of eigenvalues, which indicates a higher capacity and suggests a landscape that is easier to optimize. They also train the quantum neural network on real quantum hardware and observe fast convergence to a low training loss.
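One simple way to turn that qualitative comparison into a number is to score how evenly a spectrum is spread, for example with a normalized entropy. The metric below is an illustrative choice of mine, not the paper’s, and it reuses the hypothetical `empirical_fisher` helper above.

```python
import numpy as np

def spectral_uniformity(fisher):
    """Normalized entropy of a Fisher matrix's eigenvalue distribution.

    Returns a value in [0, 1]: 1 means perfectly uniform eigenvalues,
    values near 0 mean a few large eigenvalues dominate (many flat
    directions in the loss landscape).
    """
    eig = np.clip(np.linalg.eigvalsh(fisher), 0.0, None)
    p = eig / eig.sum()          # treat the spectrum as a distribution
    p = p[p > 0]                 # drop exact zeros before taking logs
    return float(-(p * np.log(p)).sum() / np.log(len(eig)))
```

Comparing this score across the classical network, the easy quantum model, and the quantum neural network would quantify the uniformity the authors observe.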

The authors conclude that quantum neural networks can achieve a higher capacity and train faster than comparable classical neural networks. They highlight the importance of the feature map in a quantum model and its effect on the optimization landscape. They also acknowledge the challenges posed by hardware noise and the need for further research to understand the power of quantum models.

Overall, this study provides valuable insights into the power of quantum neural networks and highlights the potential advantages they offer in terms of capacity and trainability. The authors’ use of the effective dimension and Fisher information matrix provides a robust framework for comparing quantum and classical models. The findings contribute to the growing field of quantum machine learning and pave the way for future research in this area.
